Convergence Rates of the Heavy Ball Method for Quasi-strongly Convex Optimization
Authors
Abstract
In this paper, we study the behavior of solutions of the ODE associated with the Heavy Ball method. Since the pioneering work of B. T. Polyak in 1964, it has been well known that such a scheme is very efficient for $C^2$ strongly convex functions with Lipschitz gradient, but much less is known when the $C^2$ assumption is dropped. Depending on the geometry of the function to minimize, we obtain optimal convergence rates for the class of convex functions with some additional regularity, such as quasi-strong convexity or strong convexity. We perform this analysis in continuous time for the ODE and then transpose the results to discrete optimization schemes. In particular, we propose a variant of the Heavy Ball algorithm which achieves the best state-of-the-art convergence rate among first-order methods for minimizing composite nonsmooth functions.
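For reference, the Heavy Ball dynamics discussed in the abstract are usually written as the damped second-order ODE below, together with Polyak's classical discrete counterpart; the damping parameter $\alpha > 0$, step size $s > 0$, and momentum parameter $\beta \in [0,1)$ are generic symbols used for illustration here, not the specific choices made in the paper:
$$\ddot{x}(t) + \alpha\,\dot{x}(t) + \nabla F(x(t)) = 0, \qquad x_{k+1} = x_k + \beta\,(x_k - x_{k-1}) - s\,\nabla F(x_k).$$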
Similar articles
Local Convergence of the Heavy-ball Method and iPiano for Non-convex Optimization
A local convergence result for abstract descent methods is proved. The sequence of iterates is attracted by a local (or global) minimum, stays in its neighborhood and converges. This result allows algorithms to exploit local properties of the objective function: The gradient of the Moreau envelope of a prox-regular function is locally Lipschitz continuous and expressible in terms of the proxim...
Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates
In [19], a general, inexact, efficient proximal quasi-Newton algorithm for composite optimization problems has been proposed and a sublinear global convergence rate has been established. In this paper, we analyze the convergence properties of this method, both in the exact and inexact setting, in the case when the objective function is strongly convex. We also investigate a practical variant of...
Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the s...
A Stochastic Quasi-Newton Method for Online Convex Optimization
We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full and memory-limited (LBFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperfor...
Journal
Journal title: SIAM Journal on Optimization
Year: 2022
ISSN: 1095-7189, 1052-6234
DOI: https://doi.org/10.1137/21m1403990